
    Feature Extraction using Spiking Convolutional Neural Networks

    Spiking neural networks are biologically plausible counterparts of artificial neural networks. Artificial neural networks are usually trained with stochastic gradient descent, whereas spiking neural networks are trained with spike-timing-dependent plasticity (STDP). Training deep convolutional neural networks is a memory- and power-intensive task, and spiking networks could potentially help reduce the power usage. There is a large pool of tools to choose from for training artificial neural networks of any size; in contrast, the available tools for simulating spiking neural networks are geared towards computational neuroscience applications and are not well suited to real-life applications. In this work we implement a spiking CNN in TensorFlow to examine the behaviour of the network, and we study catastrophic forgetting in the spiking CNN and the weight-initialization problem in R-STDP using the MNIST data set. We also report the classification accuracies achieved on the N-MNIST and MNIST data sets.
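The pair-based STDP rule mentioned above can be illustrated with a minimal weight update: a presynaptic spike that precedes a postsynaptic spike strengthens the synapse, the reverse ordering weakens it. This is a generic sketch, not the paper's implementation; the learning rates and time constant are assumed values.

```python
import numpy as np

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP: potentiate if the presynaptic spike precedes
    the postsynaptic spike, depress otherwise. Times in milliseconds."""
    dt = t_post - t_pre
    if dt > 0:                          # pre before post -> strengthen
        dw = a_plus * np.exp(-dt / tau)
    else:                               # post before pre -> weaken
        dw = -a_minus * np.exp(dt / tau)
    return np.clip(w + dw, 0.0, 1.0)    # keep the weight in [0, 1]

w_up = stdp_update(0.5, t_pre=10.0, t_post=15.0)    # causal pairing: weight grows
w_dn = stdp_update(0.5, t_pre=15.0, t_post=10.0)    # anti-causal pairing: weight shrinks
```

R-STDP, studied in the paper, modulates updates of this form with a reward signal, which is where the weight-initialization sensitivity arises.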

    The Effect of a Temperature-Dependent Viscosity on Cooling Droplet-Droplet Collisions

    A detailed understanding of the collision dynamics of liquid droplets is relevant to natural phenomena and industrial applications. Colliding droplets can experience temperature changes that alter their physical properties, which in turn affect the collision outcome. As viscosity is one of the relevant physical properties, this study focuses on the effect of temperature on viscosity, assuming an Arrhenius temperature dependence, in collisions of two equal-sized droplets simulated with the Volume of Fluid method. The results show that a higher droplet temperature leads to an effectively lower viscosity and hence to increased interface oscillations, which brings on the onset of separation at lower Weber numbers, as expected. Locally cooling droplets develop local viscosity profiles, which results in the formation of a ridge when the droplets combine. In addition, the collision outcomes sometimes cannot be explained solely on the basis of an effective viscosity, undermining the usefulness of existing collision regime maps.
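An Arrhenius temperature dependence of viscosity, as assumed in this study, has the general form μ(T) = μ_ref · exp[(E_a/R)(1/T − 1/T_ref)]: viscosity falls as temperature rises. A minimal sketch follows; the reference viscosity and activation energy here are illustrative values, not taken from the paper.

```python
import math

def arrhenius_viscosity(T, mu_ref=1.0e-3, T_ref=293.15, Ea=1.8e4, R=8.314):
    """Dynamic viscosity [Pa s] at temperature T [K], relative to a
    reference viscosity mu_ref at T_ref. Ea is an activation energy
    [J/mol] and R the universal gas constant [J/(mol K)]."""
    return mu_ref * math.exp((Ea / R) * (1.0 / T - 1.0 / T_ref))

# A warmer droplet is effectively less viscous, a cooler one more viscous:
mu_warm = arrhenius_viscosity(313.15)
mu_cool = arrhenius_viscosity(273.15)
```

In the simulations this relation is evaluated locally, so a cooling droplet surface develops the spatial viscosity profile responsible for the ridge formation described above.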

    Bio-Inspired Multi-Layer Spiking Neural Network Extracts Discriminative Features from Speech Signals

    Spiking neural networks (SNNs) enable power-efficient implementations due to their sparse, spike-based coding scheme. This paper develops a bio-inspired SNN that uses unsupervised learning to extract discriminative features from speech signals, which can subsequently be used in a classifier. The architecture consists of a spiking convolutional/pooling layer followed by a fully connected spiking layer for feature discovery. The convolutional layer of leaky integrate-and-fire (LIF) neurons represents primary acoustic features. The fully connected layer is equipped with a probabilistic spike-timing-dependent plasticity learning rule and represents the discriminative features through probabilistic LIF neurons. To assess the discriminative power of the learned features, they are used in a hidden Markov model (HMM) for spoken digit recognition. The experimental results show performance above 96%, which compares favorably with popular statistical feature extraction methods. Our results provide a novel demonstration of unsupervised feature acquisition in an SNN.
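The leaky integrate-and-fire dynamics underlying both layers can be sketched with a forward-Euler update of the membrane potential: input current is integrated, the potential leaks toward rest, and a threshold crossing emits a spike followed by a reset. The parameter values below are illustrative, not those of the paper.

```python
def lif_neuron(input_current, v_rest=0.0, v_thresh=1.0, tau_m=10.0, dt=1.0):
    """Simulate a LIF neuron over a sequence of input currents and
    return the spike times. The membrane potential v leaks toward
    v_rest and is reset there after each spike."""
    v, spikes = v_rest, []
    for t, i_in in enumerate(input_current):
        v += dt * (-(v - v_rest) + i_in) / tau_m   # leaky integration
        if v >= v_thresh:                          # threshold crossing
            spikes.append(t)
            v = v_rest                             # reset after the spike
    return spikes

spikes = lif_neuron([2.0] * 20)   # constant drive produces regular spiking
```

A stronger acoustic feature drives its neuron with more current and therefore spikes earlier and more often, which is what makes the spike trains informative for the downstream HMM.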

    Temporal Convolution in Spiking Neural Networks: a Bio-mimetic Paradigm

    Abstract Recent spectacular advances in Artificial Intelligence (AI) can, in large part, be attributed to developments in Deep Learning (DL). In essence, DL is not a new concept; in many respects it shares characteristics with “traditional” types of Neural Network (NN). The main distinguishing feature is that it uses many more layers in order to learn increasingly complex features, each layer convolving the previous one by simplifying and applying a function to a subsection of that layer. Deep Learning’s fantastic success can be attributed to dedicated researchers experimenting with many different groundbreaking techniques, but some of its triumph can also be attributed to fortune: it was the right technique at the right time. To function effectively, DL mainly requires two things: (a) vast amounts of training data and (b) a very specific type of computational capacity. These two requirements have been amply met by the growth of the internet and the rapid development of GPUs, making DL an almost perfect fit for today’s technologies. However, DL is only a very rough approximation of how the brain works. More recently, Spiking Neural Networks (SNNs) have tried to simulate biological phenomena in a more realistic way. In SNNs, information is transmitted as discrete spikes of data rather than through continuous weights or differentiable activation functions. In practical terms this means that far more nuanced interactions can occur between neurons and that the network can run far more efficiently (e.g. in terms of the calculations needed and therefore the overall power requirements). Nevertheless, the big problem with SNNs is that, unlike DL, they do not “fit” well with existing technologies. Worse still, no one has yet come up with a definitive way to make SNNs function at a “deep” level.
The difficulty is that, in essence, “deep” and “spiking” refer to fundamentally different characteristics of a neural network: “spiking” focuses on the activation of individual neurons, whereas “deep” concerns itself with the network architecture itself [1]. However, these two methods are in fact not contradictory; they have so far been developed in isolation from each other owing to the prevailing technology driving each technique and the fundamental conceptual distance between the two biological paradigms. If advances in AI are to continue at the present rate, new technologies will have to be developed and the contradictory aspects of DL and SNNs will have to be reconciled. Very recently, there have been a handful of attempts to amalgamate DL and SNNs in a variety of ways [2], one of the most exciting being the creation of a specific hierarchical learning paradigm for Recurrent SNNs (RSNNs) called e-prop [3]. However, this paper posits that such efforts have been made problematic because a fundamental agent in the way the biological brain functions has been missing from each paradigm, and that if it is included in a new model, the union between DL and RSNNs can be made in a more harmonious manner. The missing piece of the jigsaw is, in fact, the glial cell and the unacknowledged function it plays in neural processing. In this context, this paper examines how DL and SNNs can be combined, and how glial dynamics can not only address outstanding issues with the existing individual paradigms, for example the “weight transport” problem, but also act as the “glue” (pun intended) between these two paradigms. This idea has a direct parallel with the idea of convolution in DL but adds the dimension of time: in this new paradigm it matters not only where events happen but also when they occur. The synergy between these two powerful paradigms hints at the direction and potential of what could be an important part of the next wave of development in AI.

    In-plane Failure Analysis of URM Structures Based on Strain Hardening and Softening in the Multilaminate Framework

    This paper presents a macro model to predict the in-plane behavior of unreinforced masonry (URM) structures. The model is based on the concept of multilaminate theory. In the past, the method has been used to model the behavior of soil, disregarding cohesion and tensile strength. Given its mathematical basis and its applicability to other cases, the method is used in the present study to predict the ultimate failure load of URM structures. The model is intrinsically capable of capturing the induced anisotropy of brittle materials such as concrete, rock and masonry that develops as a result of cracking. The yield surface applied consists of a generalized Mohr-Coulomb yield surface together with a cap model and a tension cut-off. Comparing the numerical results obtained from the non-linear analysis of unreinforced masonry structures under lateral load with experimental data demonstrates the capability of the model in the failure analysis of URM structures.
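In a multilaminate model, a yield check of this kind is evaluated on each sampling plane independently. A minimal sketch of a Mohr-Coulomb criterion with a tension cut-off on one plane is given below; the cohesion, friction angle and tensile strength are illustrative values, not material data from the paper, and the cap model is omitted.

```python
import math

def plane_yields(sigma_n, tau, c=0.2, phi_deg=30.0, f_t=0.1):
    """Yield check on one sampling plane of a multilaminate model.
    sigma_n: normal stress (tension positive), tau: shear stress.
    Returns True if the state violates the Mohr-Coulomb envelope
    |tau| <= c - sigma_n * tan(phi) or the tension cut-off sigma_n <= f_t."""
    phi = math.radians(phi_deg)
    shear_failure = abs(tau) > c - sigma_n * math.tan(phi)
    tensile_failure = sigma_n > f_t
    return shear_failure or tensile_failure

elastic = not plane_yields(sigma_n=-0.5, tau=0.1)   # compression, low shear: stays elastic
```

Because each plane can yield separately, cracking on some planes but not others is what produces the induced anisotropy described in the abstract.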

    A Spiking Neural Architecture for Vector Quantization and Clustering

    Although a couple of spiking neural network (SNN) architectures have been developed to perform vector quantization, good performance remains hard to attain. Moreover, these architectures make use of rate codes, which require an implausibly high number of spikes and consequently a high energy cost. This paper presents, for the first time, an SNN architecture that uses temporal codes, more precisely a first-spike latency code, while performing competitively with respect to state-of-the-art visual coding methods. We developed a novel spike-timing-dependent plasticity (STDP) rule able to efficiently learn first-spike latency codes. This event-based rule is integrated into a two-layer SNN architecture of leaky integrate-and-fire (LIF) neurons. The first layer encodes a real-valued input vector in a spatio-temporal spike pattern, thus producing a temporal code. The second layer implements a distance-dependent lateral interaction profile that enables competitive and cooperative processes to operate. The STDP rule operates between those two layers so as to learn the inputs by adapting the synaptic weights. State-of-the-art performance is demonstrated on the MNIST and natural image datasets.
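The first-spike latency code used in the first layer maps larger input values to earlier spikes, so a whole real-valued vector becomes one spatio-temporal spike pattern. A minimal encoding sketch, assuming inputs normalized to [0, 1] and an illustrative time window t_max:

```python
def latency_encode(x, t_max=100.0):
    """Encode a real-valued vector (components in [0, 1]) as
    first-spike latencies: the larger the value, the earlier
    the spike. Returns one spike time [ms] per component."""
    return [t_max * (1.0 - xi) for xi in x]

times = latency_encode([1.0, 0.5, 0.0])   # -> [0.0, 50.0, 100.0]
```

Unlike a rate code, each component is represented by a single spike, which is the source of the energy savings the abstract points to.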

    Deep Learning of EEG Data in the NeuCube Brain-Inspired Spiking Neural Network Architecture for a Better Understanding of Depression

    In recent years, machine learning and deep learning techniques have been applied to brain data to study mental health. The activation of neurons in these models is static and continuous-valued, whereas a biological neuron processes information in the form of discrete spikes, based on the spike times and the firing rate. Understanding brain activity is vital to understanding the mechanisms underlying mental health. Spiking Neural Networks offer a computational modelling solution for understanding the complex dynamic brain processes related to mental disorders, including depression. The objective of this research is to model and visualize the brain activity of people experiencing symptoms of depression using the SNN NeuCube architecture. Resting-state EEG data were collected from 22 participants and divided into healthy and mildly depressed groups. NeuCube models were developed, along with the connections across different brain regions, using the spike-timing-dependent plasticity (STDP) learning rule for healthy and depressed individuals. This unsupervised learning revealed distinguishable patterns in the models related to the frontal, central and parietal areas of the depressed versus the control subjects, suggesting potential markers for early depression prediction. Traditional machine learning techniques, including MLP methods, were also employed for classification and prediction on the same data, but with lower accuracy and less new information gained.